Comprehending unstructured text is a challenging task for machines because it requires both understanding the text and answering questions about it. In this paper, we study the multiple-choice reading comprehension task on the MCTest dataset and on Chinese reading comprehension datasets that we built ourselves. Observing these training sets, we find that "sentence comprehension" matters more than "word comprehension" in the multiple-choice task, and we therefore propose sentence-level neural network models. Our model first uses an LSTM network and a composition model to learn compositional vector representations for sentences; it then trains a sentence-level attention model that computes, by dot product, the attention between the sentence embeddings of the document and the embeddings of the candidate option sentences. Finally, a consensus attention is obtained by merging the individual attentions with a merging function. Experimental results show that our model significantly outperforms various state-of-the-art baselines on both multiple-choice reading comprehension datasets.
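As a rough illustration of the attention scheme described above, the following sketch computes dot-product attention between document sentence embeddings and each candidate option embedding, then merges the individual attentions into a consensus attention. The function names, the NumPy representation, and the choice of `mean` as the merging function are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over attention scores
    e = np.exp(x - x.max())
    return e / e.sum()

def sentence_attention(doc_sents, option_vec):
    # dot product between each document sentence embedding (rows of
    # doc_sents) and one candidate option embedding, normalized to a
    # distribution over document sentences
    scores = doc_sents @ option_vec
    return softmax(scores)

def consensus_attention(doc_sents, option_vecs, merge="mean"):
    # one individual attention per candidate option sentence
    atts = np.stack([sentence_attention(doc_sents, o) for o in option_vecs])
    # merging function: mean, max, or sum over the individual attentions
    if merge == "mean":
        return atts.mean(axis=0)
    if merge == "max":
        return atts.max(axis=0)
    return atts.sum(axis=0)
```

With `mean` merging, the consensus remains a valid distribution over document sentences, since each individual attention already sums to one.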